VQ VAE


The vector-quantized variational autoencoder (VQ-VAE) is a generative model that learns discrete latent representations: an encoder's continuous outputs are quantized to the nearest entries of a learned codebook, and a decoder reconstructs the input from those codebook vectors.
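As a minimal sketch of the quantization step described above, the code below maps each encoder output to its nearest codebook entry by Euclidean distance. The function name `vector_quantize` and the toy codebook are illustrative, not part of any specific paper's implementation.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e: (N, D) array of continuous encoder outputs.
    codebook: (K, D) array of K learned embedding vectors.
    Returns (indices, z_q): discrete codes and the quantized vectors.
    """
    # Squared Euclidean distance from every z_e row to every codebook row,
    # computed via broadcasting: result has shape (N, K).
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete latent code per input
    z_q = codebook[indices]          # quantized representation fed to the decoder
    return indices, z_q

# Toy example: 4 encoder outputs, codebook of 3 entries in 2-D.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z_e = np.array([[0.1, -0.1], [0.9, 1.2], [-1.1, 0.8], [0.2, 0.1]])
indices, z_q = vector_quantize(z_e, codebook)
print(indices)  # → [0 1 2 0]
```

In training, the argmin is non-differentiable, so VQ-VAE copies gradients from `z_q` back to `z_e` with a straight-through estimator and adds codebook and commitment losses; this sketch covers only the forward quantization.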

Vector Quantized Latent Concepts: A Scalable Alternative to Clustering-Based Concept Discovery (Feb 02, 2026)

VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations (Feb 02, 2026)

ConLA: Contrastive Latent Action Learning from Human Videos for Robotic Manipulation (Jan 31, 2026)

Is Hierarchical Quantization Essential for Optimal Reconstruction? (Jan 29, 2026)

iFSQ: Improving FSQ for Image Generation with 1 Line of Code (Jan 27, 2026)

Class-Partitioned VQ-VAE and Latent Flow Matching for Point Cloud Scene Generation (Jan 18, 2026)

TimeMar: Multi-Scale Autoregressive Modeling for Unconditional Time Series Generation (Jan 16, 2026)

TokenSeg: Efficient 3D Medical Image Segmentation via Hierarchical Visual Token Compression (Jan 08, 2026)

Hierarchical Vector-Quantized Latents for Perceptual Low-Resolution Video Compression (Dec 31, 2025)

SafeMo: Linguistically Grounded Unlearning for Trustworthy Text-to-Motion Generation (Jan 02, 2026)